Regional District of Nanaimo
Privacy Ethics Alignment in AI: A Stakeholder-Centric Based Framework for Ethical AI
Ankur Barthwal, Molly Campbell, Ajay Kumar Shrestha
The increasing integration of Artificial Intelligence (AI) into digital ecosystems has reshaped privacy dynamics, particularly for young digital citizens navigating data-driven environments. This study explores evolving privacy concerns across three key stakeholder groups: digital citizens (ages 16-19), parents/educators, and AI professionals. It assesses differences in their views on data ownership, trust, transparency, parental mediation, education, and risk-benefit trade-offs. Employing a grounded theory methodology, the research synthesizes insights from 482 participants through structured surveys, qualitative interviews, and focus groups. The findings reveal distinct privacy expectations: young users emphasize autonomy and digital freedom, while parents and educators advocate for regulatory oversight and AI literacy programs. AI professionals, in contrast, prioritize balancing ethical system design with technological efficiency. The data further highlight gaps in AI literacy and transparency, emphasizing the need for comprehensive, stakeholder-driven privacy frameworks that accommodate diverse user needs. Using comparative thematic analysis, the study identifies key tensions in privacy governance and develops the novel Privacy-Ethics Alignment in AI (PEA-AI) model, which structures privacy decision-making as a dynamic negotiation between stakeholders. By systematically analyzing themes such as transparency, user control, risk perception, and parental mediation, this research provides a scalable, adaptive foundation for AI governance, ensuring that privacy protections evolve alongside emerging AI technologies and youth-centric digital interactions.
Investigation of the Privacy Concerns in AI Systems for Young Digital Citizens: A Comparative Stakeholder Analysis
Molly Campbell, Ankur Barthwal, Sandhya Joshi, Austin Shouli, Ajay Kumar Shrestha
The integration of Artificial Intelligence (AI) systems into technologies used by young digital citizens raises significant privacy concerns. This study investigates these concerns through a comparative analysis of stakeholder perspectives. A total of 252 participants were surveyed; after data cleaning, the analysis focused on 110 valid responses from parents/educators and 100 from AI professionals. Quantitative methods, including descriptive statistics and Partial Least Squares Structural Equation Modeling (PLS-SEM), examined five validated constructs: Data Ownership and Control, Parental Data Sharing, Perceived Risks and Benefits, Transparency and Trust, and Education and Awareness. Results showed that Education and Awareness significantly influenced perceptions of data ownership and risk, while Data Ownership and Control strongly impacted Transparency and Trust. Transparency and Trust, along with Perceived Risks and Benefits, showed minimal influence on Parental Data Sharing, suggesting that other factors may play a larger role. The study underscores the need for user-centric privacy controls, tailored transparency strategies, and targeted educational initiatives. Incorporating diverse stakeholder perspectives offers actionable insights into ethical AI design and governance, balancing innovation with robust privacy protections to foster trust in the digital age.
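The abstract names the structural relationships among the five constructs but not the model specification. Purely as an illustrative sketch, and not the authors' actual analysis, those reported paths could be written in lavaan-style syntax and estimated in Python with the semopy package; note that semopy fits covariance-based SEM rather than the PLS-SEM used in the study, and the indicator columns (ea1, do1, ...) and the file name stakeholder_survey.csv are hypothetical placeholders.

    import pandas as pd
    from semopy import Model

    # Hypothetical Likert-scale survey data; column names are placeholders,
    # not the study's actual questionnaire items.
    data = pd.read_csv("stakeholder_survey.csv")

    # Measurement model (=~) and structural paths (~) mirroring the
    # construct-to-construct relationships reported in the abstract.
    desc = """
    EducationAwareness =~ ea1 + ea2 + ea3
    DataOwnership      =~ do1 + do2 + do3
    RisksBenefits      =~ rb1 + rb2 + rb3
    TransparencyTrust  =~ tt1 + tt2 + tt3
    ParentalSharing    =~ ps1 + ps2 + ps3

    DataOwnership     ~ EducationAwareness
    RisksBenefits     ~ EducationAwareness
    TransparencyTrust ~ DataOwnership
    ParentalSharing   ~ TransparencyTrust + RisksBenefits
    """

    model = Model(desc)
    model.fit(data)          # estimate loadings and path coefficients
    print(model.inspect())   # path estimates, standard errors, p-values

The study itself would typically estimate such a model with dedicated PLS-SEM software; this sketch only shows how the reported paths among constructs fit together in a single structural specification.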
Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives
Ajay Kumar Shrestha, Sandhya Joshi
As Artificial Intelligence (AI) systems become increasingly integrated into various aspects of daily life, concerns about privacy and ethical accountability are gaining prominence. This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals. Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection, alongside perceived benefits such as personalized services, enhanced efficiency, and educational advancements. Stakeholders emphasized the need for transparency, privacy-by-design, user empowerment, and ethical oversight to address privacy concerns effectively. The findings provide actionable insights into balancing the benefits of AI with robust privacy protections, catering to the diverse needs of stakeholders. Recommendations include implementing selective data use, fostering transparency, promoting user autonomy, and integrating ethical principles into AI development. This study contributes to the ongoing discourse on ethical AI, offering guidance for designing privacy-centric systems that align with societal values and build trust among users. By addressing privacy challenges, this research underscores the importance of developing AI technologies that are not only innovative but also ethically sound and responsive to the concerns of all stakeholders.
Planet Mu's Newest Star Makes Club Music Imagined by Artificial Intelligence (Thump)
Antwood is the alias of Tristan Douglas, a producer, microbiologist, and all-around deep thinker hailing from Nanaimo, British Columbia. Some might remember his EP Work Focus from last year, an under-the-radar gem put out by net label B.YRSLF Division. It's a slice of footwork that's been pummeled and fractured into something that's neither here nor there, which is probably why it grabbed the attention of Planet Mu boss Mike Paradinas. Around that time, Douglas was still recording under the name Margaret Antwood, an admittedly lazy play on internationally celebrated poet and novelist Margaret Atwood. "I thought it'd be funny to take some figure that's barely known in the public consciousness and do like a really crappy pun of her name," he says.